On Frank-Wolfe and Equilibrium Computation (Supplementary)
Authors
Abstract
The celebrated minimax theorem for zero-sum games, first discovered by John von Neumann in the 1920s [14, 10], is certainly a foundational result in the theory of games. It states that two players, playing a game with zero-sum payoffs, each have an optimal randomized strategy that can be played obliviously – that is, even announcing their strategy in advance to an optimal opponent would not damage their own respective payoff, in expectation. Or, if you are not fond of game theory, the result can be stated quite simply in terms of the order of operations of a particular two-stage optimization problem: letting ∆n and ∆m be the n- and m-dimensional probability simplices, respectively, and letting M ∈ Rn×m be any real-valued matrix, we have min_{x∈∆n} max_{y∈∆m} x⊤My = max_{y∈∆m} min_{x∈∆n} x⊤My.
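The identity above can be checked numerically: each side of the equality is a linear program over the corresponding simplex. A minimal sketch using scipy (the matching-pennies matrix is an illustrative choice, not from the paper; assumes scipy is installed):

```python
import numpy as np
from scipy.optimize import linprog

M = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies; game value is 0
n, m = M.shape

# min_x max_y x^T M y: minimize v subject to (M^T x)_j <= v, x in the simplex
res1 = linprog(c=np.r_[np.zeros(n), 1.0],
               A_ub=np.hstack([M.T, -np.ones((m, 1))]), b_ub=np.zeros(m),
               A_eq=np.r_[np.ones(n), 0.0].reshape(1, -1), b_eq=[1.0],
               bounds=[(0, None)] * n + [(None, None)])

# max_y min_x x^T M y: maximize w subject to (M y)_i >= w, y in the simplex
res2 = linprog(c=np.r_[np.zeros(m), -1.0],
               A_ub=np.hstack([-M, np.ones((n, 1))]), b_ub=np.zeros(n),
               A_eq=np.r_[np.ones(m), 0.0].reshape(1, -1), b_eq=[1.0],
               bounds=[(0, None)] * m + [(None, None)])

# the two orders of optimization agree, as the minimax theorem promises
print(res1.fun, -res2.fun)
```

Both printed values coincide (here, the game value 0), illustrating that swapping min and max leaves the value unchanged.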
Similar resources
On Frank-Wolfe and Equilibrium Computation
We consider the Frank-Wolfe (FW) method for constrained convex optimization, and we show that this classical technique can be interpreted from a different perspective: FW emerges as the computation of an equilibrium (saddle point) of a special convex-concave zero sum game. This saddle-point trick relies on the existence of no-regret online learning to both generate a sequence of iterates but al...
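The abstract above concerns the classical Frank-Wolfe iteration, whose only feasible-set access is a linear minimization oracle; over the probability simplex that oracle reduces to picking the vertex at the coordinate of smallest gradient. A minimal sketch (the quadratic toy objective and the standard 2/(t+2) step size are illustrative choices, not taken from the paper):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, T):
    """Frank-Wolfe on the probability simplex.

    The linear minimization oracle argmin_{s in simplex} <g, s>
    is solved exactly by the vertex at the smallest gradient entry.
    """
    x = x0.copy()
    for t in range(1, T + 1):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # oracle output: a simplex vertex
        gamma = 2.0 / (t + 2.0)      # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s
    return x

# toy objective f(x) = ||x - c||^2, whose minimizer c lies in the simplex
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]), 2000)
```

Because each iterate is a convex combination of the current point and a vertex, the method stays feasible without any projection step, which is exactly what makes the game-theoretic (oracle-based) reading of FW natural.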
Full text
Application of Particle Swarm Optimization and Genetic Algorithm Techniques to Solve Bi-level Congestion Pricing Problems
The solutions used to solve bi-level congestion pricing problems are usually based on heuristic network optimization methods, which may not be able to find the best solution for these types of problems. The application of meta-heuristic methods can be seen as a viable alternative, but so far it has not received enough attention from researchers in this field. Therefore, the objective of thi...
Full text
Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls
We propose a rank-k variant of the classical Frank-Wolfe algorithm to solve convex optimization over a trace-norm ball. Our algorithm replaces the top singular-vector computation (1-SVD) in Frank-Wolfe with a top-k singular-vector computation (k-SVD), which can be done by repeatedly applying 1-SVD k times. Our algorithm has a linear convergence rate when the objective function is smooth and str...
Full text
New analysis and results for the Frank-Wolfe method
We present new results for the Frank-Wolfe method (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (...
Full text
Partial Linearization Based Optimization for Multi-class SVM
We propose a novel partial linearization based approach for optimizing the multi-class SVM learning problem. Our method is an intuitive generalization of the Frank-Wolfe and the exponentiated gradient algorithms. In particular, it allows us to combine several of their desirable qualities into one approach: (i) the use of an expectation oracle (which provides the marginals over each output class...
Full text